12 research outputs found

    Modelling speaker adaptation in second language learner dialogue

    Understanding how tutors and students adapt to one another within Second Language (L2) learning is an important step in the development of better automated tutoring tools for L2 conversational practice. Such an understanding can not only inform conversational agent design, but can also be useful for other pedagogic applications such as formative assessment, self-reflection on tutoring practice, learning analytics, and conversation modelling for personalisation and adaptation. Dialogue is a challenging domain for natural language processing, understanding, and generation. It is necessary to understand how participants adapt to their interlocutor, changing what they express and how they express it as they update their beliefs about the knowledge, preferences, and goals of the other person. While this adaptation is natural to humans, it is an open problem for dialogue systems, where managing coherence across utterances is an active area of research, even without adaptation. This thesis extends our understanding of adaptation in human dialogue in order to better implement it in agent-based conversational dialogue. This is achieved through comparison to fluent conversational dialogues and across student ability levels. Specifically, we are interested in how adaptation takes place in terms of the linguistic complexity, lexical alignment, and dialogue act usage demonstrated by the speakers within the dialogue. Finally, with the end goal of an automated tutor in mind, the student alignment levels are used to compare dialogues between student and human tutor with those where the tutor is an agent. We argue that the lexical complexity, alignment, and dialogue style adaptation we model in L2 human dialogue are signs of tutoring strategies in action, and hypothesise that creating agents which adapt to these aspects of dialogue will result in better environments for learning.
    We hypothesise that with a more adaptive agent, student alignment may increase, potentially resulting in improved engagement and learning. We find that in L2 practice dialogues, both student and tutor adapt to each other, and that this adaptation depends on student ability. Tutors adapt to push students of higher ability, and to encourage students of lower ability. Complexity, dialogue act usage, and alignment are used differently by speakers in L2 dialogue than in other types of conversational dialogue, and their use changes depending on learner proficiency. Using alignment as a measure, we also find that learner behaviours in automated L2 tutoring dialogues differ from those in human ones. This thesis contributes new findings on interlocutor adaptation within second language practice dialogue, with an emphasis on how these can be used to improve tutoring dialogue agents.
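    The lexical alignment the abstract refers to can be illustrated with a minimal overlap-based proxy: the fraction of one speaker's tokens that repeat tokens their interlocutor has already used. This sketch is purely illustrative and is not the thesis's actual metric; the example utterances are invented.

    ```python
    import re
    from typing import List

    def lexical_alignment(prime_utts: List[str], target_utts: List[str]) -> float:
        """Fraction of the target speaker's tokens that repeat tokens
        the other speaker (the prime) has already used.
        A deliberately simple overlap proxy for lexical alignment."""
        tokenise = lambda s: re.findall(r"\w+", s.lower())
        prime_vocab = {tok for utt in prime_utts for tok in tokenise(utt)}
        target_tokens = [tok for utt in target_utts for tok in tokenise(utt)]
        if not target_tokens:
            return 0.0
        return sum(tok in prime_vocab for tok in target_tokens) / len(target_tokens)

    # Hypothetical L2 practice exchange: tutor primes, student responds.
    tutor_turns = ["Can you describe your weekend?", "What did you cook?"]
    student_turns = ["I cook pasta on the weekend."]
    score = lexical_alignment(tutor_turns, student_turns)
    ```

    Here "cook" and "weekend" are reused from the tutor's turns, so a higher score indicates the student leaning more heavily on the tutor's vocabulary.
    
    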

    Structural Persistence in Language Models: Priming as a Window into Abstract Language Representations

    Acknowledgments: We would like to thank the anonymous reviewers for their extensive and thoughtful feedback and suggestions, which greatly improved our work, as well as the action editor for his helpful guidance. We would also like to thank members of the ILLC, past and present, for their useful comments and feedback, specifically Dieuwke Hupkes, Mario Giulianelli, Sandro Pezzelle, and Ece Takmaz. Arabella Sinclair worked on this project while affiliated with the University of Amsterdam. The project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 819455). Peer reviewed. Publisher PDF.

    Refer, Reuse, Reduce: Generating Subsequent References in Visual and Conversational Contexts

    Dialogue participants often refer to entities or situations repeatedly within a conversation, which contributes to its cohesiveness. Subsequent references exploit the common ground accumulated by the interlocutors and hence have several interesting properties, namely, they tend to be shorter and reuse expressions that were effective in previous mentions. In this paper, we tackle the generation of first and subsequent references in visually grounded dialogue. We propose a generation model that produces referring utterances grounded in both the visual and the conversational context. To assess the referring effectiveness of its output, we also implement a reference resolution system. Our experiments and analyses show that the model produces better, more effective referring utterances than a model not grounded in the dialogue context, and generates subsequent references that exhibit linguistic patterns akin to humans. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020).
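    The two properties the abstract names, that subsequent mentions shrink and reuse earlier wording, can be sketched with simple per-mention statistics. This is an illustrative proxy only, not the paper's evaluation; the reference chain below is invented.

    ```python
    import re
    from typing import Dict, List

    def mention_stats(mentions: List[str]) -> List[Dict[str, float]]:
        """For a chain of referring expressions to the same entity,
        report each mention's length in tokens and the fraction of its
        tokens reused from the first mention."""
        tokenise = lambda s: re.findall(r"\w+", s.lower())
        first_vocab = set(tokenise(mentions[0]))
        stats = []
        for mention in mentions:
            toks = tokenise(mention)
            reuse = sum(t in first_vocab for t in toks) / len(toks)
            stats.append({"length": len(toks), "reuse_vs_first": reuse})
        return stats

    # Hypothetical chain of mentions of the same person.
    chain = ["the man with the striped blue shirt",
             "the striped shirt guy",
             "striped guy"]
    stats = mention_stats(chain)
    ```

    On a chain like this, lengths fall across mentions while the surviving tokens are largely drawn from the first, introductory description.
    
    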

    Controllable Text Generation for All Ages: Evaluating a Plug-and-Play Approach to Age-Adapted Dialogue

    Funding Information: We would like to thank the four anonymous GEM reviewers for their valuable feedback and the participants of our crowdsourcing experiments. The work received funding from the University of Amsterdam’s Research Priority Area Human(e) AI and from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 819455). Publisher PDF.

    State-of-the-art generalisation research in NLP: a taxonomy and review

    The ability to generalise well is one of the primary desiderata of natural language processing (NLP). Yet, what 'good generalisation' entails and how it should be evaluated is not well understood, nor are there any common standards to evaluate it. In this paper, we aim to lay the groundwork to improve both of these issues. We present a taxonomy for characterising and understanding generalisation research in NLP, we use that taxonomy to present a comprehensive map of published generalisation studies, and we make recommendations for which areas might deserve attention in the future. Our taxonomy is based on an extensive literature review of generalisation research, and contains five axes along which studies can differ: their main motivation, the type of generalisation they aim to solve, the type of data shift they consider, the source by which this data shift is obtained, and the locus of the shift within the modelling pipeline. We use our taxonomy to classify over 400 previous papers that test generalisation, for a total of more than 600 individual experiments. Considering the results of this review, we present an in-depth analysis of the current state of generalisation research in NLP, and make recommendations for the future. Along with this paper, we release a webpage where the results of our review can be dynamically explored, and which we intend to update as new NLP generalisation studies are published. With this work, we aim to make steps towards making state-of-the-art generalisation testing the new status quo in NLP. Comment: 35 pages of content + 53 pages of references.
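    The five taxonomy axes the abstract lists can be made concrete as a small record type, one field per axis. The axis names come from the abstract; the example values below are hypothetical placeholders, not the paper's actual axis inventory.

    ```python
    from dataclasses import dataclass

    @dataclass
    class GeneralisationStudy:
        """One study classified along the taxonomy's five axes."""
        motivation: str            # main motivation for testing generalisation
        generalisation_type: str   # type of generalisation the study targets
        shift_type: str            # type of data shift considered
        shift_source: str          # how the data shift is obtained
        shift_locus: str           # where in the modelling pipeline the shift sits

    # Hypothetical classification of a single experiment.
    study = GeneralisationStudy(
        motivation="practical",
        generalisation_type="compositional",
        shift_type="covariate",
        shift_source="generated",
        shift_locus="train-test",
    )
    ```

    Classifying each of the paper's 600+ experiments into such records is what makes the released webpage searchable along every axis independently.
    
    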

    A taxonomy and review of generalization research in NLP

    Funding Information: We thank A. Williams, A. Joulin, E. Bruni, L. Weber, R. Kirk and S. Riedel for providing feedback on the various stages of this paper, and G. Marcus for providing detailed feedback on the final draft. We also thank the reviewers of our work for providing useful comments. We thank E. Hupkes for making the app that allows searching through references, and we thank D. Haziza and E. Takmaz for other contributions to the website. M.G. was supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement no. 819455). V.D. was supported by the UKRI Centre for Doctoral Training in Natural Language Processing, funded by the UKRI (grant no. EP/S022481/1) and the University of Edinburgh. N.S. was supported by the Hyundai Motor Company (under the project Uncertainty in Neural Sequence Modeling) and the Samsung Advanced Institute of Technology (under the project Next Generation Deep Learning: From Pattern Recognition to AI). Publisher Copyright: © 2023, The Author(s). Peer reviewed. Publisher PDF.